
정보과학회 컴퓨팅의 실제 논문지 (KIISE Transactions on Computing Practices)


Korean Title: 모바일 디바이스에서 계층별 프로세서 할당을 통한 딥러닝 학습 가속
English Title: Accelerating Deep Network Training with Layer-specific Processor Allocation on Mobile Devices
Author: 하동휘 (Donghee Ha), 권진세 (Jinse Kwon), 김형신 (Hyungshin Kim)
Citation: Vol. 27, No. 6, pp. 263~272 (June 2021)
Korean Abstract (translated)
Personalized deep learning applications must retrain the deep learning model to meet the user's requirements. Conventional training methods train the model on a server and then send it to the mobile device. This approach can cause problems such as leakage of personal data and increased server operating costs. To solve these problems, deep learning training is performed on the mobile device itself. However, mobile devices lack the resources to carry out deep learning training easily. This paper proposes a system that speeds up deep learning training on mobile devices by using the mobile CPU and GPU efficiently. The proposed system profiles the per-layer computation time of the model and the data transfer time between processors. Based on the profiling results, it searches over processors using dynamic programming and assigns the optimal processor to each layer. Custom data consisting of three categories was trained by transfer learning, using a model pre-trained on CIFAR-10 images. Experiments with the proposed algorithm on the ODROID-XU4 and the Firefly RK3399 Plus confirmed performance improvements of 25.7% and 3.2%, respectively.
English Abstract
With the recent development of deep learning, personalized applications have increased. Personalized deep learning models require initial training according to the user's requirements, and when previously unseen data occurs, it is necessary to retrain and update the model. Traditional methods send personal data to a server to create a model, then send the model back to the mobile device. In this process, problems such as leakage of private data, excessive network traffic, and increased server operating costs may occur. To solve these problems, on-device learning is a well-known approach. However, mobile devices lack hardware resources. In this paper, we propose a method to reduce the training time on a mobile device by effectively utilizing the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU). The proposed system profiles the per-layer computation and data transfer times. Using the profiling results and dynamic programming, the method searches over processors and allocates the optimal processor to each layer. Starting from a model pre-trained on CIFAR-10, we apply transfer learning to train on custom data consisting of three categories faster than training from scratch. On two mobile devices (ODROID-XU4 and Firefly RK3399 Plus), the proposed method reduces the execution time by 25.7% and 3.2%, respectively.
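The allocation step the abstracts describe (profile per-layer compute and inter-processor transfer times, then pick a processor per layer by dynamic programming) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the function name, the two-processor setup, and the toy timing numbers are all assumptions, and the paper's actual cost model may differ.

```python
def allocate_processors(compute, transfer):
    """compute[i][p]: profiled time of layer i on processor p (e.g. 0=CPU, 1=GPU).
    transfer[q][p]: cost of moving activations from processor q to processor p.
    Returns (per-layer processor assignment, total time)."""
    n = len(compute)           # number of layers
    P = len(compute[0])        # number of processors
    INF = float("inf")
    # dp[i][p]: best total time for layers 0..i with layer i placed on p
    dp = [[INF] * P for _ in range(n)]
    # choice[i][p]: processor of layer i-1 in that best solution (for backtracking)
    choice = [[0] * P for _ in range(n)]

    for p in range(P):
        dp[0][p] = compute[0][p]
    for i in range(1, n):
        for p in range(P):
            for q in range(P):
                cand = dp[i - 1][q] + transfer[q][p] + compute[i][p]
                if cand < dp[i][p]:
                    dp[i][p] = cand
                    choice[i][p] = q

    # Backtrack the optimal per-layer assignment.
    best = min(range(P), key=lambda p: dp[n - 1][p])
    plan = [best]
    for i in range(n - 1, 0, -1):
        best = choice[i][best]
        plan.append(best)
    plan.reverse()
    return plan, min(dp[n - 1])

# Toy profile: 4 layers, CPU-vs-GPU times, unit cost for switching processors.
compute = [[5.0, 2.0], [1.0, 4.0], [6.0, 2.5], [1.5, 3.0]]
transfer = [[0.0, 1.0], [1.0, 0.0]]
plan, total = allocate_processors(compute, transfer)
# With these numbers the optimum alternates GPU/CPU: plan == [1, 0, 1, 0]
```

Because each layer's choice depends only on the previous layer's placement, the table has n×P entries and the search runs in O(n·P²) time, which is why dynamic programming is a natural fit for this per-layer assignment problem.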
Keywords: on-device learning, processor, mobile device, transfer learning, dynamic programming